9 research outputs found

    Monitoring and Controlling Phone Usage to Raise Awareness and Combat Digital Addiction.

    One of the defining factors in human progress is how readily humans have adopted technology into their everyday lives. One such technology that has seen a tremendous increase in usage is the mobile phone. A smartphone is very easy to overuse, with many possible negative psychological side effects. Digital addiction is a form of addiction that has become more prevalent due to the ever-growing capabilities of our devices. This work focuses on how a software application could assist people who already have the addiction, or help prevent people from becoming addicted. This paper presents the design and implementation of a mobile application that monitors and controls phone usage in order to help combat digital addiction. The prototype implementation lets users see how much time they spend on their phone and set usage preferences. The study has been evaluated through user testing and user feedback.
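    The abstract gives no implementation details, but the core monitoring idea can be illustrated with a minimal, hypothetical sketch (the class, app categories, and limit values below are ours, not the paper's): per-app session durations are aggregated and checked against a user-set daily limit.

```python
from collections import defaultdict

class UsageMonitor:
    """Hypothetical sketch: aggregate per-app usage and flag overuse."""

    def __init__(self, daily_limits_minutes):
        # daily_limits_minutes is a user preference, e.g. {"social": 60}
        self.daily_limits = daily_limits_minutes
        self.usage = defaultdict(float)  # app category -> minutes used today

    def record_session(self, app, minutes):
        self.usage[app] += minutes

    def over_limit(self):
        # Return the app categories whose usage exceeds the user's limit.
        return [app for app, used in self.usage.items()
                if used > self.daily_limits.get(app, float("inf"))]

monitor = UsageMonitor({"social": 60})
monitor.record_session("social", 45)
monitor.record_session("social", 30)
print(monitor.over_limit())  # ['social']
```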

    Culturally-Aware Motivation for Smart Services: An Exploratory Study of the UAE

    The adoption of smart services can be challenging despite the benefits they offer in terms of ubiquity and intelligence. The main reasons are the relatively difficult administration of the services on smartphones and the availability of alternatives, including in-person and desktop services. However, the design of smart services could be more proactive and attract users through persuasive and motivational techniques. These techniques should be culturally sensitive. This paper reviews the use of Cialdini's six principles of influence in the cultural context of the UAE and assesses how they should be applied to increase the adoption of smart services. As a method, we conducted in-depth interviews with ten experts in various domains, including marketing and customer services, in the UAE. We report on the potential and adverse effects of using these principles in the UAE context and identify context-specific factors, aiming to give the management of software-based motivation a starting point for design and evaluation.

    A Process Model for Component-Based Model-Driven Software Development

    Developing high-quality, reliable software systems on time is challenging due to the increasing size and complexity of these systems. Traditional software development approaches are not well suited to such challenges, so several approaches have been introduced to increase productivity and reusability during the software development process. Two of these are Component-Based Software Engineering (CBSE) and Model-Driven Software Development (MDD), which focus on reusing pre-developed code and on using models throughout the development process, respectively. Many research studies show the benefits of using software components and model-driven approaches. However, in many cases the development process is either ad hoc or not well defined. This paper proposes a new software development process model that merges CBSE and MDD principles to facilitate software development. The model is successfully tested by applying it to the development of an e-learning system as an exemplar case study.
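    To make the MDD half of the proposal concrete, here is a minimal, hypothetical sketch of the core model-driven step: a platform-independent component description is transformed into a code skeleton by a generator. The model format and the generated shape are illustrative assumptions, not the paper's notation.

```python
# Hypothetical component model; an illustrative stand-in, not the paper's notation.
component_model = {
    "name": "Enrollment",
    "provides": ["enroll_student", "drop_student"],
    "requires": ["StudentRepository"],
}

def generate_component_skeleton(model):
    """Generate a Python class skeleton from a component model."""
    deps = ", ".join(d.lower() for d in model["requires"])
    lines = [f"class {model['name']}Component:",
             f"    def __init__(self, {deps}):"]
    lines += [f"        self.{d.lower()} = {d.lower()}" for d in model["requires"]]
    for op in model["provides"]:
        lines += ["", f"    def {op}(self, *args, **kwargs):",
                  "        raise NotImplementedError  # filled in by developers"]
    return "\n".join(lines)

print(generate_component_skeleton(component_model))
```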

    A Survey on the Usability and User Experience of the Open Community Web Portals

    Web-based portals enable a new communication paradigm that can provide a variety of benefits to both customers and companies. Customers have continuous access to services, information, support, and payments on the portal, with the possibility of personalisation. This paper presents a survey of the usability and user experience studies relevant to open community web portals and information sharing platforms. The objective of the work was to produce an overview of how the literature has reported on usability in relation to information sharing web portals. A systematic mapping method was applied to identify and quantify primary studies focusing on the usability and user experience of open community web portals.

    Data cleaning techniques for software engineering data sets

    Data quality is an important issue which has been addressed and recognised in research communities such as data warehousing, data mining and information systems. It is widely agreed that poor data quality impacts the quality of the results of analyses, and therefore the decisions made on the basis of those results. Empirical software engineering has neglected the issue of data quality to some extent; this poses the question of how researchers in empirical software engineering can trust their results without addressing the quality of the analysed data. One widely accepted definition describes data quality as 'fitness for purpose', and poor data quality can be addressed either by introducing preventative measures or by applying means to cope with data quality issues. The research presented in this thesis addresses the latter, with a special focus on noise handling.

    Three noise handling techniques, which utilise decision trees, are proposed for application to software engineering data sets. Each technique represents a noise handling approach: robust filtering, where training and test sets are the same; predictive filtering, where training and test sets are different; and filtering and polish, where noisy instances are corrected. The techniques were first evaluated in two investigations by applying them to a large real-world software engineering data set. The first investigation tested the techniques' ability to improve predictive accuracy at differing noise levels. All three techniques improved predictive accuracy in comparison with the do-nothing approach, and filtering and polish was the most successful. The second investigation tested the techniques' ability to identify instances with implausible values; these instances were flagged for the purpose of evaluation before the three techniques were applied. Robust filtering and predictive filtering decreased the number of instances with implausible values, but substantially decreased the size of the data set too. The filtering and polish technique actually increased the number of implausible values, although it did not reduce the size of the data set.

    Since the data set contained historical software project data, it was not possible to know the real extent of the noise detected. This led to the production of simulated software engineering data sets, modelled on the real data set used in the previous evaluations to ensure domain-specific characteristics. These simulated versions of the data set were then injected with noise, such that the real extent of the noise was known, and the three noise handling techniques were applied to allow evaluation. This procedure combined the domain-specific characteristics of the real world with control over the simulated data, which is seen as a particular strength of this evaluation approach. The results of this evaluation showed that none of the techniques performed well. Robust filtering and filtering and polish performed very poorly and, based on these results, would not be recommended for the task of noise reduction. Predictive filtering was the best performing technique in this evaluation, but it did not perform particularly well either.
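    As a reading aid, here is a minimal, hypothetical sketch of predictive filtering as described above: a decision tree is trained on one part of the data and instances in a different part that the tree misclassifies are flagged as noisy. The data, split, and misclassification criterion are illustrative assumptions, not the thesis's actual setup.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Illustrative stand-in data: features and a class label per project record.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Predictive filtering: train the detector on one part of the data,
# then flag instances in a *different* part that the tree misclassifies.
train, filt = slice(0, 100), slice(100, 200)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X[train], y[train])

predictions = tree.predict(X[filt])
noisy_mask = predictions != y[filt]      # instances flagged as noisy
X_clean = X[filt][~noisy_mask]
y_clean = y[filt][~noisy_mask]
print(f"flagged {noisy_mask.sum()} of {noisy_mask.size} instances as noisy")
```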
    An exhaustive systematic literature review was carried out to investigate to what extent the empirical software engineering community has considered data quality. The findings showed that the issue has been largely neglected, and the work in this thesis therefore highlights an important gap in empirical software engineering. The thesis also clarifies and distinguishes the terms noise and outliers: the two concepts overlap but are fundamentally different, and since they are often treated the same by noise handling techniques, a clarification was necessary. To investigate the capabilities of noise handling techniques, a single investigation was deemed insufficient, because the distinction between noise and outliers is not trivial and the investigated noise cleaning techniques derive from traditional techniques in which noise and outliers are combined. Therefore three investigations were undertaken to assess the effectiveness of the three presented noise handling techniques, each forming part of a multi-pronged approach. Finally, the thesis highlights possible shortcomings of current automated noise handling techniques. The poor performance of the three techniques led to the conclusion that noise handling should be integrated into a data cleaning process in which the input of domain knowledge and the replicability of the cleaning process are ensured.
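    The simulation-based evaluation described above depends on injecting noise whose extent is known exactly. A minimal, hypothetical sketch of that step follows; the noise rate and the label-flipping corruption scheme are illustrative assumptions.

```python
import numpy as np

def inject_noise(y, noise_rate, rng):
    """Flip the labels of a known fraction of instances so the true
    extent of the injected noise is known exactly."""
    y_noisy = y.copy()
    n_noisy = int(noise_rate * len(y))
    idx = rng.choice(len(y), size=n_noisy, replace=False)
    y_noisy[idx] = 1 - y_noisy[idx]      # binary labels assumed
    return y_noisy, idx                  # idx is the ground-truth noise set

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=100)
y_noisy, noise_idx = inject_noise(y, noise_rate=0.1, rng=rng)
print(f"injected noise into {len(noise_idx)} of {len(y)} instances")
```

    With the ground-truth index set in hand, a detector's precision and recall against the injected noise can be measured directly, which is what makes the simulated data sets useful for evaluation.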

    A Mobile Phone App for the Provision of Personalized Food-Based Information in an Eating-Out Situation: Development and Initial Evaluation.

    BACKGROUND: Increasing pressure from governments, public health bodies, and consumers is driving a need for increased food-based information provision in eating-out situations. Meals eaten outside the home are known to be less healthy than meals eaten at home, and consumers complain of poor information about the health impact and allergen content of meals eaten out. OBJECTIVE: This paper aimed to describe the development and early assessment of a mobile phone app that provides accurate, personalized food-based information, taking individual characteristics (allergies, diet type, and preferences) into account, to enable informed consumer choice when eating out. METHODS: An app was designed and developed to address these requirements using an agile approach. The developed app was then evaluated at 8 public engagement events using the System Usability Scale (SUS) questionnaire and qualitative feedback. RESULTS: Consideration of the literature and consultation with consumers revealed a need for information provision for consumers in the eating-out situation, including the ability to limit the information provided to that which is personally relevant or interesting. The app was designed to provide information on the dishes available in a workplace canteen and to allow consumers the freedom to personalize the app and choose the information that they received. Evaluation using the SUS questionnaire revealed positive responses to the app from a range of potential users, and qualitative comments demonstrated broad interest in its use. CONCLUSIONS: This paper details the successful development and early assessment of a novel mobile phone app designed to provide food-based information in an eating-out situation in a personalized manner.
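    The evaluation relies on the standard System Usability Scale, whose scoring rule is fixed and worth stating: odd-numbered items contribute (rating - 1), even-numbered items contribute (5 - rating), and the sum is multiplied by 2.5 to give a 0-100 score. A minimal sketch (the example ratings are made up, not the study's data):

```python
def sus_score(responses):
    """Standard SUS scoring for 10 items rated 1-5: odd items contribute
    (rating - 1), even items contribute (5 - rating); the sum times 2.5
    yields a 0-100 usability score."""
    assert len(responses) == 10
    total = 0
    for i, rating in enumerate(responses, start=1):
        total += (rating - 1) if i % 2 == 1 else (5 - rating)
    return total * 2.5

# Made-up example: one participant's ratings for the 10 SUS items.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```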

    Abstract

    OBJECTIVE: The aim is to report on an assessment of the impact noise has on predictive accuracy by comparing noise handling techniques. METHOD: We describe the process of cleaning a large software management dataset initially comprising more than 10,000 projects. Data quality is assessed mainly through feedback from the data provider and manual inspection of the data. Three methods of noise correction (polishing, noise elimination and robust algorithms) are compared with each other for accuracy. Noise detection was undertaken using a regression tree model. RESULTS: The three noise correction methods are compared and differences in their accuracy were noted. CONCLUSIONS: The results demonstrate that polishing improves classification accuracy compared with the noise elimination and robust algorithm approaches.
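    As a reading aid, here is a minimal, hypothetical sketch of polishing, the best-performing method above: instead of discarding instances flagged as noisy, their values are replaced with model predictions, so the data set keeps its original size. The stand-in data, the tree detector, and the residual threshold are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)

# Illustrative stand-in data: project features and a noisy effort value.
X = rng.normal(size=(200, 3))
effort = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=200)
effort[rng.choice(200, size=20, replace=False)] += 10   # injected noise

# Fit a regression tree and flag instances with large residuals as noisy.
tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X, effort)
predicted = tree.predict(X)
residuals = np.abs(effort - predicted)
noisy = residuals > 3 * np.median(residuals)

# Polishing: correct flagged values rather than deleting the instances.
effort_polished = effort.copy()
effort_polished[noisy] = predicted[noisy]
print(f"polished {noisy.sum()} suspected-noisy values")
```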

    Evaluating Three Approaches to Extracting Fault Data from Software Change Repositories

    Software products can only be improved if we have a good understanding of the faults they typically contain. Code faults are a significant source of software product problems which we currently do not understand sufficiently. Open source change repositories are potentially a rich and valuable source of fault data for both researchers and practitioners. Such fault data can be used to better understand current product problems so that we can predict and address future product problems. However, extracting fault data from change repositories is difficult. In this paper we compare the performance of three approaches to extracting fault data from the change repository of the Barcode Open Source System. Our main finding is that we have most confidence in our manual evaluation of diffs to identify fault-fixing changes; we had less confidence in the ability of the two automatic approaches to separate fault-fixing from non-fault-fixing changes. We conclude that it is very difficult to reliably extract fault-fixing data from change repositories, especially using automatic tools, and that we need to be cautious when reporting or using such data.
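    The abstract does not specify the two automatic approaches, but a common automatic technique of this kind scans commit messages for fault-fix indicators. The sketch below is purely illustrative; the keyword list and example messages are our assumptions, not the approaches evaluated in the paper.

```python
import re

# Common fault-fix indicators; the exact keyword set is an assumption,
# not the one used in the paper.
FIX_PATTERN = re.compile(r"\b(fix(e[sd])?|bug|fault|defect|patch)\b", re.IGNORECASE)

def looks_like_fault_fix(commit_message):
    """Heuristically classify a change as fault-fixing from its message."""
    return bool(FIX_PATTERN.search(commit_message))

# Made-up example commit messages.
for msg in ["Fix off-by-one error in decoder",
            "Add support for EAN-13 barcodes",
            "Patch memory leak reported in issue 42"]:
    print(looks_like_fault_fix(msg), msg)
```

    Heuristics like this are exactly where the paper's caution applies: a message may mention a bug without fixing one, or fix a fault silently, which is why the manual evaluation of diffs inspired more confidence.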